METHOD FOR SCHEDULING TASKS AT THE NODE LEVEL OF A COMPUTER CLUSTER, TASK SCHEDULER AND ASSOCIATED CLUSTER
Patent abstract:
The invention relates to a task scheduling method, at the level of at least some nodes (1) of a computer cluster, comprising: first, the launching (22) of two containers (5, 8) on each of said nodes (1), a standard container (5) and a priority container (8); then, for all or part of said nodes (1) with two containers (5, 8), at each node (1): as long as a priority task does not occur, the assignment of the available resources of the node (1) to its standard container (5) to execute a standard task, its priority container (8) not executing any task; when a priority task occurs, the dynamic switching (25) of only a portion of the resources of its standard container (5) to its priority container (8), so that, on the one hand, the priority task is executed in the priority container (8) with the switched part of the resources, and, on the other hand, the standard task continues to be executed, without being stopped, in the standard container (5) with the non-switched part of the resources.
Publication number: FR3031203A1
Application number: FR1463319
Filing date: 2014-12-24
Publication date: 2016-07-01
Inventors: Yann Maupu; Thomas Cadeau; Matthieu Daniel
Applicant: Bull SA
Patent description:
[0001] TECHNICAL FIELD OF THE INVENTION The invention relates to a method for scheduling tasks at the level of at least some nodes of a computer cluster, to a task scheduler to be assigned to the nodes of a computer cluster, and to a computer cluster including such a task scheduler. BACKGROUND OF THE INVENTION According to a first prior art, it is known to stop the execution of the standard task at the node in question, to allocate the resources of the considered node to execute the priority task, and only then to resume the execution of the standard task. The disadvantage of this first prior art is having to completely restart the interrupted standard task, all the processing already performed for this standard task being completely lost, or at least lost since the last synchronization point. Such processing, even counted only since the last synchronization point, can easily represent several hours or even days of computation at a large number of nodes of a large computer cluster, which represents a considerable mass of processing or calculation time, irremediably lost. In addition, some tasks are executed without any synchronization point. According to a second prior art, it is known to wait for the completion of the standard task, or at least for its next synchronization point, and then to pass the priority task in front of all the other standard tasks waiting in a scheduling queue. No standard task processing already done is lost. But the priority task must then wait either for the end of the running standard task or at least for its execution to reach the next synchronization point, which can take a very long time. The corresponding delay of a critical priority task can be very damaging. SUMMARY OF THE INVENTION The object of the present invention is to provide a method of scheduling tasks at the nodes of a computer cluster at least partially overcoming the aforementioned drawbacks. More particularly, the invention aims at providing a method for scheduling tasks at the nodes of a computer cluster which first launches at least two containers, a standard container and a priority container, on a node, then switches resources from the standard container to the priority container when a priority task occurs while the node is executing a standard task in the standard container, in order to be able, on the one hand, to perform the priority task urgently in the priority container with sufficient resources, and, on the other hand, to continue performing the standard task, even at reduced speed, without having to stop it and therefore without losing the work already done. Indeed, a task stopped in progress must be completely restarted, all the work already done being lost. In this way, the management of the priority levels of the task scheduler is optimized, and the overall throughput of such a task scheduler is increased. The flexibility of use of a cluster implementing this method of scheduling tasks at the cluster nodes is also significantly improved.
[0002] To this end, the present invention proposes a task scheduling method, at the level of at least some nodes of a computer cluster, comprising: first, the launch of two containers on each of said nodes, a standard container and a priority container; then, for all or part of said two-container nodes, at each node: as long as a priority task does not occur, the assignment of the available resources of the node to its standard container to execute a standard task, its priority container not executing any task; when a priority task occurs, the dynamic switching of only a part of the resources from its standard container to its priority container, so that, on the one hand, the priority task is executed in the priority container with the switched part of the resources, and, on the other hand, the standard task continues to be executed, without being stopped, in the standard container with the non-switched part of the resources. [0003] The available resources of the node are the resources not required by the internal operation of the node or by the opening of the priority container. In the absence of a priority task, preferably the majority, if not most or almost all, of the available resources are assigned to the standard container. [0004] The resources considered encompass at least the processor resources, and preferably also the RAM resources. The resources considered may optionally further include input/output resources and/or network resources. Said standard task can continue to be executed, without being stopped, until its termination or until its next synchronization point, thus making it possible not to lose the partial processing already carried out for this task. The important thing is not to stop the execution of the standard task under conditions where the work already performed for this standard task would be lost, but on the contrary either not to stop the execution of the standard task at all, or to stop it only under conditions where the work already executed for this standard task can be recovered without loss. To this end, the present invention also proposes a computer cluster comprising: a plurality of nodes, and a task scheduler to be assigned to said nodes, configured, for at least some of said nodes, so as to: first, launch two containers on each of said nodes, a standard container and a priority container; then, for all or part of said two-container nodes, at each node: as long as a priority task does not occur, assign resources of the node to its standard container to execute a standard task, its priority container not executing any task; when a priority task occurs, dynamically switch only part of the resources from its standard container to its priority container, so that, on the one hand, the priority task is executed in the priority container with the switched part of the resources, and, on the other hand, the standard task continues to be executed, without being stopped, in the standard container with the non-switched part of the resources.
To this end, the present invention further proposes a task scheduler to be assigned to the nodes of a computer cluster, configured, for at least some of said nodes, so as to: first, launch two containers on each of said nodes, a standard container and a priority container; then, for all or part of said two-container nodes, at each node: as long as a priority task does not occur, assign resources of the node to its standard container to execute a standard task, its priority container not executing any task; when a priority task occurs, dynamically switch only part of the resources from its standard container to its priority container, so that, on the one hand, the priority task is executed in the priority container with the switched part of the resources, and, on the other hand, the standard task continues to be executed, without being stopped, in the standard container with the non-switched part of the resources. [0005] It might also be considered to install several virtual machines on each node of the cluster, but the management of several virtual machines on the same real node would result in a significant loss of task processing performance at this real node. This loss is much greater in the case of management of a second virtual machine than in the case of management of a second container, by about an order of magnitude, for example a factor of 10 (for example, performance losses of the order of 30% instead of performance losses of the order of 3%). [0006] It could also be considered to install several "chroots" on each node of the cluster, but it would not then be possible to dynamically switch resources from one chroot to another chroot at a node that is executing a standard task when a priority task occurs. [0007] According to preferred embodiments, the invention comprises one or more of the following features, which may be used separately, in partial combination with one another, or in total combination with one another, with any of the objects previously presented. [0008] Preferably, as long as a priority task does not occur, all the available resources of the node are assigned to its standard container. Preferably, as long as a priority task does not occur, at least 90%, preferably at least 95%, of the node resources are allocated to the standard container, and/or less than 5% of the node resources are allocated to the priority container. Thus, in normal mode, that is to say in the absence of a priority task to be performed urgently, cluster performance is similar to what it would be if the nodes did not each have two containers. Preferably, once the priority task is completed, the resources having been switched from the standard container to the priority container are switched back from the priority container to the standard container. Thus, once the cluster returns to normal mode, that is to say once the priority task has been executed urgently, cluster performance returns to the same level as it would have if the nodes did not each have two containers. [0009] Preferably, when a priority task occurs, the switching of resources is performed by one or more OS-level virtualization control groups arranged in the kernel layer of the node. Thus, the sizing of the resources assigned to the different containers, standard and priority, is performed at the kernel of the host node of the containers, that is to say the node hosting the containers.
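As an illustration of this kernel-level mechanism, the following minimal sketch shows how an agent on the host node could resize two container control groups through the cgroup v1 filesystem interface. The cgroup subtree, container names, and numeric values are assumptions chosen for the example, not details taken from the patent.

```python
# Minimal sketch, assuming a cgroup v1 hierarchy mounted at /sys/fs/cgroup
# and two containers whose cgroups live under an "lxc" subtree.
# Paths, names, and figures are illustrative assumptions.

CGROUP_ROOT = "/sys/fs/cgroup"

def write_cgroup(controller: str, container: str, knob: str, value: str) -> None:
    """Write one value into a container's cgroup control file."""
    path = f"{CGROUP_ROOT}/{controller}/lxc/{container}/{knob}"
    with open(path, "w") as f:
        f.write(value)

def size_containers(standard_cpu_weight: int, priority_cpu_weight: int,
                    standard_mem_bytes: int, priority_mem_bytes: int) -> None:
    # cpu.shares expresses a relative CPU weight between sibling cgroups.
    write_cgroup("cpu", "standard", "cpu.shares", str(standard_cpu_weight))
    write_cgroup("cpu", "priority", "cpu.shares", str(priority_cpu_weight))
    # memory.limit_in_bytes caps the RAM usable by each container.
    write_cgroup("memory", "standard", "memory.limit_in_bytes", str(standard_mem_bytes))
    write_cgroup("memory", "priority", "memory.limit_in_bytes", str(priority_mem_bytes))

GIB = 1024 ** 3
# Normal mode: almost everything to the standard container (here ~97%/3%).
size_containers(970, 30, 62 * GIB, 2 * GIB)
# On arrival of a priority task: switch most of the CPU weight and half the RAM.
size_containers(250, 750, 32 * GIB, 32 * GIB)
```

Because cpu.shares is a relative weight rather than a hard cap, the standard container keeps running on whatever processor time the priority task leaves free, which matches the requirement that the standard task is slowed down but never stopped.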
Preferably, each container has its own operating system allowing it to communicate directly with the kernel of the host node of these standard and priority containers, independently of the operating system of said host node. Thus, each container, standard or priority, can behave as a virtual node and be considered by the task scheduler as a virtual node. Preferably, said method is performed at the majority of the nodes of said computer cluster, preferably at all the nodes of said computer cluster. Thus, the cluster will have the ability to handle a priority task requiring a larger processing volume. Preferably, at least one of the two-container nodes, preferably several two-container nodes, more preferably the majority of the two-container nodes, still more preferably all the two-container nodes, is or are compute nodes. It is indeed at the level of compute nodes that the occurrence of a priority task is most likely. Preferably, each container is a Linux container. Indeed, the Linux container, on the one hand, is fully adapted to being easily resized dynamically when needed, and, on the other hand, has a structure already manipulated by the task scheduler, which therefore requires no significant modification. Preferably, the resources include both the processor resources and the RAM resources of the node. These are the most important resources required to be able to perform a priority task urgently. Preferably, the resources also include the input/output resources and network resources of the node. Preferably, all the resources, processor or RAM, of the same electronic chip of the node, that is to say of the same processor or the same RAM module, are assigned to the same container, either all to the standard container or all to the priority container. This avoids sharing the processor resources or memory resources of the same chip between different containers, which would be more difficult to manage at the kernel of the host node. For example, in the case of two processors, one is assigned to the priority container while the other remains assigned to the standard container. For example, in the case of four processors, three are assigned to the priority container while the fourth remains assigned to the standard container, as illustrated in the sketch below. Preferably, the proportion of the resources to be switched from the standard container to the priority container can be set by the cluster administrator. Thus, the cluster administrator, depending on the frequency and type of priority tasks that may occur, can optimize the proportion of resources to switch so as to perform the priority task in the required time while slowing down the execution of the standard task as little as possible. Preferably, when a priority task occurs, at least 50%, preferably at least 75%, of the standard container's processor resources are switched to the priority container. Preferably, when a priority task occurs, at least 50%, preferably at least 75%, of the RAM resources of the standard container are switched to the priority container. The priority task generally having to be performed very quickly, a majority of the resources is then assigned to the priority container in which this priority task will be executed. Other features and advantages of the invention will appear on reading the following description of a preferred embodiment of the invention, given by way of example and with reference to the accompanying drawings.
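The whole-chip assignment just described can be sketched with the cgroup v1 cpuset controller, which pins a cgroup to explicit lists of CPUs and memory nodes. The sketch below assumes a hypothetical node with four sockets of eight cores each and one NUMA memory node per socket; the topology, paths, and container names are illustrative assumptions, not details from the patent.

```python
# Hypothetical sketch: assigning whole sockets to each container with cpuset.
# Assumed topology: 4 sockets x 8 cores, NUMA memory node i attached to socket i.

CPUSET_ROOT = "/sys/fs/cgroup/cpuset/lxc"

def assign_sockets(container: str, sockets: list[int], cores_per_socket: int = 8) -> None:
    """Pin a container's cgroup to the cores and memory nodes of whole sockets."""
    cpus = ",".join(
        f"{s * cores_per_socket}-{(s + 1) * cores_per_socket - 1}" for s in sockets
    )
    mems = ",".join(str(s) for s in sockets)
    with open(f"{CPUSET_ROOT}/{container}/cpuset.cpus", "w") as f:
        f.write(cpus)   # e.g. "8-15,16-23,24-31" for sockets 1, 2 and 3
    with open(f"{CPUSET_ROOT}/{container}/cpuset.mems", "w") as f:
        f.write(mems)   # keep memory on the same sockets as the cores

# Priority mode in the four-processor example above: three whole sockets are
# switched to the priority container, the fourth stays with the standard one.
assign_sockets("priority", [1, 2, 3])
assign_sockets("standard", [0])
```

Working at socket granularity keeps each chip's cores and local memory in a single container, which avoids the cross-container sharing of one chip that the description singles out as hard to manage in the host kernel.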
BRIEF DESCRIPTION OF THE DRAWINGS Figure 1 schematically shows an example of a node on which two containers are launched according to one embodiment of the invention. Figure 2 schematically represents an example of the operation of a node on which two containers are launched, according to one embodiment of the invention, as long as a priority task does not occur. Figure 3 schematically represents an example of the operation of a node on which two containers have been launched, according to one embodiment of the invention, when a priority task occurs. Figure 4 schematically shows an example of the steps of the task scheduling method according to one embodiment of the invention. DETAILED DESCRIPTION OF THE INVENTION Figure 1 schematically represents an example of a node on which two containers are launched according to one embodiment of the invention. A node 1 comprises hardware 2, above which is disposed a kernel 3, above which is disposed an operating system 4 ("OS distribution", for "Operating System Distribution"). The hardware 2 and the kernel 3 communicate with each other bidirectionally. The kernel 3 and the operating system 4 communicate with each other bidirectionally. The operating system 4 is called the host operating system because it is the operating system of the node 1 itself. [0010] The kernel 3 comprises a control group 11 ("cgroup", for "control group"). A control group is a kernel element whose essential functions are to limit, account for, and isolate the use of resources (including processor resources, memory resources, input/output resources and network resources) by the different groups of processes. Memory resources are essentially RAM resources. On the node 1, which is a real node, two containers 5 and 8 are launched. A container is a form of virtualization of the execution environment (including notably processor, random access memory, network and file system) in which a set of processes isolated from the host machine runs; a container is preferably a Linux container ("LXC", for "Linux Container"). The control group 11 of the kernel 3 thus manages the distribution of resources between the two containers 5 and 8. A Linux container combines control groups and namespaces so as to constitute an isolated environment allowing the execution of tasks independently both of the node itself and of the other container or containers of this node. The container 5, which is a standard container, comprises an application 6 located above an operating system 7. The application 6 and the operating system 7 communicate with each other bidirectionally. The operating system 7 is the operating system of the container 5; it is independent of the operating system 4 of the node 1 and communicates bidirectionally and directly with the kernel 3 of the node 1. [0011] The container 8, which is a priority container, comprises an application 9 located above an operating system 10. The application 9 and the operating system 10 communicate with each other bidirectionally. The operating system 10 is the operating system of the container 8; it is independent of the operating system 4 of the node 1 and communicates bidirectionally and directly with the kernel 3 of the node 1. The operating systems 7 and 10 are shown identical to each other and different from the operating system 4 in Figure 1.
But these operating systems 4, 7 and 10 could also all be identical to each other, or all different from each other. Examples of operating systems for the containers 5 and 8 are SUSE, REDHAT or UBUNTU. Figure 2 schematically represents an example of the operation of a node on which two containers are launched, according to one embodiment of the invention, as long as a priority task does not occur. The distribution of resources described corresponds, for example, to the distribution of processor resources. The distribution of the other resources, including RAM, input/output and network resources, is performed in a similar manner. The different resources can be distributed differently, for example 75%/25% between the two containers for the processor resources, with the largest part allocated to the priority container, and 50%/50% between the two containers for the RAM resources. Figure 2 corresponds to the normal execution of a standard task in the standard container 5. Thus, as long as a priority task does not occur, the available processor resources of the node 1 are assigned to its standard container 5 to execute a standard task, its priority container 8 not executing any task. A small portion of the processor resources is still allocated to the priority container 8, so as to be able to open it and keep it idle; this small part often remains less than 5% of the processor resources of the node 1, and is advantageously between 2% and 3% of the processor resources of the node 1. [0012] The absence of the application box 9 shows that the priority container 8 does not execute a task. A large part of the processor resources is then allocated to the standard container 5, which executes a standard task; this large part often remains greater than 95% of the processor resources of the node 1, and is advantageously between 97% and 98% of the processor resources of the node 1. This distribution of processor resources in favor of the standard container 5, which executes a standard task, at the expense of the priority container 8, which does not execute a task, is performed by the control group 11 of the kernel 3. [0013] Figure 3 schematically represents an example of the operation of a node on which two containers have been launched, according to one embodiment of the invention, when a priority task occurs. A priority task is provided by the batch scheduler, not shown in the figure, which manages the queue of tasks waiting to be executed at the level of the nodes in general and at the level of the node 1 in particular. When a priority task occurs, the control group 11 of the kernel 3 performs a dynamic switching of only a portion of the processor resources from its standard container 5 to its priority container 8, so that, on the one hand, the priority task is performed in the priority container 8 with the switched part of the processor resources, and, on the other hand, the standard task continues to be executed, without being stopped, in the standard container 5 with the non-switched portion of the processor resources. [0014] Between Figures 2 and 3, it can be seen that 70% of the processor resources of the node 1 have been switched from the standard container 5 to the priority container 8. Thus, in this priority mode represented in Figure 3,
the priority container 8 can carry out its priority task quickly and efficiently with 75% of the processor resources of the node 1, while the standard container 5 continues its standard task, slowed down, with only 25% of the processor resources of the node 1 instead of the 95% it had in the normal mode shown in Figure 2. Once the priority task has been completed in the priority container 8, the resources switched from the standard container 5 to the priority container 8 are switched back from the priority container 8 to the standard container 5, so as to return to the normal mode configuration shown in Figure 2, allowing the standard container to continue performing its standard task with 95% of the processor resources, until it is complete or until a new priority task arrives at the node 1. The simultaneous or concurrent arrival of two priority tasks at the level of a node on which a standard task is already running is very rare. To manage this case, it is possible to open not two but three or more containers. However, opening many containers that are idle most of the time uses resources unnecessarily and drags down the overall performance of the node. Therefore, preferably, only two containers are launched on the same node, and no more. The second priority task should then be put on hold until the end of the first priority task. A priority task generally takes less time, or much less time, than a standard task, for equal use of node resources. Figure 4 schematically represents an example of the progress of the steps of the task scheduling method according to one embodiment of the invention. [0015] The task scheduling method successively proceeds through a step 20 of setting the proportion of nodes concerned, a step 21 of setting the proportion of resources, a step 22 of launching the two containers, a step 23 of normal execution of a standard task, the occurrence 24 of a priority task during the execution of the standard task, a step 25 of switching resources, a step 26 of executing the priority task and the slowed-down standard task in parallel, the completion 27 of the execution of the priority task, a step 28 of switching back resources, and a step 29 of normal execution of the standard task. In step 20 of setting the proportion of nodes concerned by the simultaneous launch of two containers, the cluster administrator decides on the number and the type of nodes on which two containers will be simultaneously launched, a standard container to execute the standard tasks and a priority container to execute the priority tasks. The task scheduler of the cluster will in fact see, at such a node, two virtual nodes constituted by the two containers, standard and priority, of the node. The other nodes will work classically and will each be seen by the task scheduler as a real node. In step 21 of setting the proportion of resources, on each node and for each type of resource, including processor resources, RAM resources, input/output resources and network resources, the distribution of resources between the standard container and the priority container in case of occurrence of a priority task is parameterized, most of the resources remaining assigned to the standard container as long as such a priority task does not occur.
For example, when a priority task occurs, the control group resizes the containers so that the priority container comes to have about 75% of the processor resources, 50% of the RAM resources, 25% of the input/output resources and 50% of the network resources, while the standard container keeps about 25% of the processor resources, 50% of the RAM resources, 75% of the input/output resources and 50% of the network resources. Preferably, the distribution of resources between standard container and priority container is identical or similar for all the nodes concerned, but it can differ per group of nodes or even vary from one node to another. Optionally, one of the resource types, for example the network resources, may not be intended to be switched at all from the standard container to the priority container when a priority task occurs, if the type of priority tasks that may occur never requires such resources. In step 22 of launching the two containers, the two containers, a standard container and a priority container, are launched on each of the nodes of the cluster concerned. In step 23 of normal execution of a standard task, the standard container, having at least 95% of the available resources, less than 5% being assigned to the priority container which is not executing a task, carries out the execution of a standard task normally, that is to say at normal speed. When a priority task occurs (occurrence 24) during the execution of the standard task, the control group prepares to perform the resource switching set up in step 21 of setting the proportion of resources. [0016] In step 25 of switching resources, the allocation of resources between the two containers is rebalanced in favor of the priority container, which takes a significant part, or even preferably the majority, of the resources previously allocated to the standard container, while nevertheless leaving the standard container enough resources to be able to continue performing its standard task, even at reduced speed, so that all the work already done for the standard task by the standard container is not lost but is preserved. In step 26 of parallel execution of the priority task and the slowed-down standard task, on the one hand the priority task is executed in the priority container with the switched resources, and on the other hand the standard task continues to be executed in the standard container with reduced resources, namely the non-switched resources, at reduced speed. Upon completion 27 of the execution of the priority task, the control group prepares to switch back, from the priority container to the standard container, the resources that had previously been switched from the standard container to the priority container. In step 28 of switching back resources, the priority container retains only less than 5% of the available resources, so as to remain open and ready to execute a future priority task, while the standard container takes back more than 95% of the available resources to continue performing the standard task that it had never stopped, this time again at normal speed and no longer at the reduced speed of step 26 of parallel execution of the tasks. [0017] In step 29 of normal execution of the standard task, the standard task is executed in the standard container with most, if not all, of the resources of the node, as in step 23, and this until the arrival of a new priority task, which resumes the process at the occurrence of a priority task described above.
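As a concrete illustration of steps 21, 24-25 and 27-28, the following sketch drives the two containers with the lxc-cgroup command-line tool, applying the example proportions given above to the processor and RAM resources (input/output and network resources would be handled analogously). The container names, the helper names, and the absolute resource figures are assumptions made for the example; only the step numbers and the percentages come from the description.

```python
# Hypothetical sketch of the failover cycle of Figure 4 (steps 21, 24-28),
# assuming two LXC containers named "standard" and "priority" whose limits
# can be adjusted at runtime with lxc-cgroup. All figures are illustrative.
import subprocess

NODE_RAM_BYTES = 64 * 1024 ** 3  # assumed node RAM, for the example
TOTAL_CPU_SHARES = 1000          # arbitrary total relative CPU weight

# Step 21: proportions set by the administrator for priority mode.
PRIORITY_MODE = {"priority": {"cpu": 0.75, "ram": 0.50},
                 "standard": {"cpu": 0.25, "ram": 0.50}}
# Normal mode: the priority container keeps only enough to stay open.
NORMAL_MODE = {"priority": {"cpu": 0.03, "ram": 0.03},
               "standard": {"cpu": 0.97, "ram": 0.97}}

def apply(mode: dict) -> None:
    """Resize both containers' cgroups according to the given proportions."""
    for name, share in mode.items():
        cpu = str(int(share["cpu"] * TOTAL_CPU_SHARES))
        ram = str(int(share["ram"] * NODE_RAM_BYTES))
        subprocess.run(["lxc-cgroup", "-n", name, "cpu.shares", cpu], check=True)
        subprocess.run(["lxc-cgroup", "-n", name,
                        "memory.limit_in_bytes", ram], check=True)

def on_priority_task_arrival() -> None:     # occurrence 24, then step 25
    apply(PRIORITY_MODE)  # standard task keeps running, only slowed down

def on_priority_task_completion() -> None:  # completion 27, then step 28
    apply(NORMAL_MODE)    # switch the resources back to the standard container
```

Because the containers are merely resized and never destroyed, the standard task's processes, and therefore all the work they have already done, survive the whole cycle, which is the central property the method claims over the two prior arts.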
Of course, the present invention is not limited to the examples and to the embodiment described and shown; it is open to numerous variants accessible to those skilled in the art.
Claims:
Claims (16) [0001] 1. Task scheduling method, at the level of at least some nodes (1) of a computer cluster, comprising: first, the launching (22) of two containers (5, 8) on each of said nodes (1), a standard container (5) and a priority container (8), then, for all or part of said nodes (1) with two containers (5, 8), at each node (1): as long as a priority task does not occur, the assignment of the available resources of the node (1) to its standard container (5) to execute a standard task, its priority container (8) not executing any task; when a priority task occurs, the dynamic switching (25) of only a part of the resources of its standard container (5) to its priority container (8), so that, on the one hand, the priority task is executed in the priority container (8) with the switched part of the resources, and, on the other hand, the standard task continues to be executed, without being stopped, in the standard container (5) with the non-switched part of the resources. [0002] 2. Task scheduling method according to claim 1, characterized in that, as long as a priority task does not occur, all the available resources of the node (1) are assigned to its standard container (5). [0003] 3. Task scheduling method according to claim 1 or 2, characterized in that, as long as a priority task does not occur, at least 90%, preferably at least 95%, of the resources of the node (1) are assigned to the standard container (5), and/or less than 5% of the resources of the node (1) are assigned to the priority container (8). [0004] 4. Task scheduling method according to any one of the preceding claims, characterized in that, once the priority task has been completed (27), the resources that have been switched from the standard container (5) to the priority container (8) are switched back (28) from the priority container (8) to the standard container (5). [0005] 5. Task scheduling method according to any one of the preceding claims, characterized in that, when a priority task occurs (24), the switching (25) of the resources is carried out by one or more operating-system-level virtualization control groups (11) arranged in the kernel layer (3) of the node (1). [0006] 6. Task scheduling method according to any one of the preceding claims, characterized in that each container (5, 8) has its own operating system (7, 10) enabling it to communicate directly with the kernel (3) of the node (1) hosting these standard (5) and priority (8) containers, independently of the operating system (4) of said host node (1). [0007] 7. Task scheduling method according to any one of the preceding claims, characterized in that said method is performed at the majority of the nodes (1) of said computer cluster, preferably at all the nodes (1) of said computer cluster. [0008] 8. Task scheduling method according to any one of the preceding claims, characterized in that at least one of the nodes (1) with two containers (5, 8), preferably several nodes (1) with two containers (5, 8), more preferably the majority of the nodes (1) with two containers (5, 8), still more preferably all the nodes (1) with two containers (5, 8), is or are compute nodes. [0009] 9. Task scheduling method according to any one of the preceding claims, characterized in that each container (5, 8) is a Linux container. [0010] 10. Task scheduling method according to any one of the preceding claims, characterized in that the resources comprise both the processor resources and the RAM resources of the node (1). [0011] 11.
Task scheduling method according to claim 10, characterized in that all the resources, processor or RAM, of the same chip of the node, that is to say of the same processor or of the same RAM module, are assigned to the same container, either all to the standard container (5) or all to the priority container (8). [0012] 12. Task scheduling method according to any one of the preceding claims, characterized in that the resources also include the input/output resources and the network resources of the node (1). [0013] 13. Task scheduling method according to any one of the preceding claims, characterized in that the proportion of resources to switch from the standard container (5) to the priority container (8) is configurable by the administrator of the cluster. [0014] 14. Task scheduling method according to any one of the preceding claims, characterized in that, when a priority task occurs (24), at least 50%, preferably at least 75%, of the processor resources of the standard container (5) are switched to the priority container (8), and/or, when a priority task occurs (24), at least 50%, preferably at least 75%, of the RAM resources of the standard container (5) are switched to the priority container (8). [0015] 15. A computer cluster comprising: several nodes (1), and a task scheduler to be assigned to said nodes (1), configured, for at least some of said nodes (1), so as to: first, launch (22) two containers (5, 8) on each of said nodes (1), a standard container (5) and a priority container (8), then, for all or part of said nodes (1) with two containers (5, 8), at each node (1): as long as a priority task does not occur, assign resources of the node (1) to its standard container (5) to execute a standard task, its priority container (8) not executing any task; when a priority task occurs, dynamically switch (25) only part of the resources from its standard container (5) to its priority container (8), so that, on the one hand, the priority task is executed in the priority container (8) with the switched part of the resources, and, on the other hand, the standard task continues to be executed, without being stopped, in the standard container (5) with the non-switched part of the resources. [0016] 16. Task scheduler to be assigned to the nodes (1) of a computer cluster, configured, for at least some of said nodes (1), so as to: first, launch (22) two containers (5, 8) on each of said nodes (1), a standard container (5) and a priority container (8), then, for all or part of said nodes (1) with two containers (5, 8), at each node (1): as long as a priority task does not occur, assign resources of the node (1) to its standard container (5) to execute a standard task, its priority container (8) not executing any task; when a priority task occurs, dynamically switch (25) only part of the resources from its standard container (5) to its priority container (8), so that, on the one hand, the priority task is executed in the priority container (8) with the switched part of the resources, and, on the other hand, the standard task continues to be executed, without being stopped, in the standard container (5) with the non-switched part of the resources.